The Future of Management in an AI World by Jordi Canals & Franz Heukamp


Author: Jordi Canals & Franz Heukamp
ISBN: 9783030206802
Publisher: Springer International Publishing


5.7 The Limits to Optimization

Our second approach takes a different tack than trying to produce more accurate outcomes. Instead of boosting the low predictive power of many HR algorithms with measures that are not causally related to the outcomes, we propose acknowledging that these outcomes are essentially random (Denrell et al. 2015; Liu and Denrell 2018). When we have great difficulty determining which candidates will succeed in promotions, for example, rather than asserting that the process is objective (even if we cannot explain why), we might instead take a random draw among candidates with the relevant prerequisites.
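The procedure described above, screening on explicit prerequisites and then drawing at random among those who qualify, can be sketched in a few lines. This is only an illustration; the candidate fields, the prerequisite rule, and the function name are hypothetical, not from any real HR system.

```python
import random

def qualified_lottery(candidates, meets_prerequisites, seed=None):
    """Select one candidate uniformly at random from those who
    meet the stated prerequisites (hypothetical illustration)."""
    pool = [c for c in candidates if meets_prerequisites(c)]
    if not pool:
        raise ValueError("no candidate meets the prerequisites")
    # A fixed seed lets the draw be audited or reproduced later,
    # which supports the fairness claim made in the text.
    rng = random.Random(seed)
    return rng.choice(pool)

# Example: promote one of the candidates with at least 3 years of tenure
candidates = [
    {"name": "A", "tenure": 5},
    {"name": "B", "tenure": 2},
    {"name": "C", "tenure": 4},
]
winner = qualified_lottery(candidates, lambda c: c["tenure"] >= 3, seed=42)
```

Note that the prerequisite screen does the substantive work: the lottery is only fair in the sense the text describes if every candidate in the pool genuinely meets the bar.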

Research shows that employees perceive random processes as fair for determining complex and thus uncertain outcomes (Lind and Van den Bos 2002). “Flipping a coin” has a long history as a device for settling disputes, from ties in election outcomes to allocating fishing rights (see Stone 2011). Randomization is especially attractive where there are “losers” in the outcomes and those losers remain in the organization or relationship, such as employees who are not selected for promotion. Telling them that the decision literally was made on a coin toss is much easier to bear than telling them either that it was a close choice (you were almost as good, and something small could have changed the outcome) or that it was not close (you were not almost as good, and nothing you could have done would have mattered).

Closely related to the notion of fairness is explainability, in this case the extent to which employees understand the criteria used for data analytics-based decisions. A simple seniority decision rule—more senior workers get preference over less senior ones—is easy to understand and feels objective even if we do not always like its implications. An ML algorithm based on a weighted combination of 10 performance-related factors is much more difficult to understand, especially when employees make inevitable comparisons with each other and cannot see the basis of different outcomes. (Professors who have to explain to students why their grade differs from that of a friend whom they believe wrote a similar answer are familiar with this problem.) More complex algorithms tend to be more accurate, but they also become more difficult to understand and explain.
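The contrast drawn above can be made concrete by placing the two decision rules side by side. The employee fields, factor weights, and function names below are invented for illustration; the point is only that the first rule is stated in one line while the second buries its reasoning in weights an employee never sees.

```python
def seniority_rule(employees):
    """Transparent rule: the most senior employee gets preference."""
    return max(employees, key=lambda e: e["seniority"])

def weighted_score(employee, weights):
    """Opaque-feeling rule: a weighted combination of several factors.
    The weights here are illustrative, not from any real HR model."""
    return sum(weights[f] * employee[f] for f in weights)

employees = [
    {"name": "A", "seniority": 9, "sales": 0.6, "peer_rating": 0.7},
    {"name": "B", "seniority": 4, "sales": 0.9, "peer_rating": 0.8},
]
weights = {"seniority": 0.02, "sales": 0.5, "peer_rating": 0.4}

senior_pick = seniority_rule(employees)
scored_pick = max(employees, key=lambda e: weighted_score(e, weights))
```

Here the two rules pick different people (A by seniority, B by weighted score), and only the first choice can be explained in a sentence; explaining the second requires disclosing and justifying every weight.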

A well-known example of the importance of explainability to users comes from IBM Watson for Oncology. This application met considerable resistance from oncologists because it was difficult to understand how the system was arriving at its decisions. When the application disagreed with the doctor’s assessment, this lack of transparency made it difficult for medical experts to accept and act upon the recommendations the system produced (Bloomberg 2018). Especially in “high stakes” contexts, such as those that affect people’s lives—or their careers—explainability is likely to become imperative for the successful use of ML technologies. We expect major progress in this area in the coming years, due to a wave of investment from the commercial and government sectors geared toward explainable AI. For instance, the US
